Search for: All records

Creators/Authors contains: "Felmlee, Diane"

Note: Clicking a Digital Object Identifier (DOI) link takes you to an external site maintained by the publisher. Some full-text articles may not yet be available free of charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites, whose policies may differ from those of this site.

  1. Social media platforms are accused repeatedly of creating environments in which women are bullied and harassed. We argue that online aggression toward women aims to reinforce traditional feminine norms and stereotypes. In a mixed-methods study, we find that this type of aggression on Twitter is common and extensive and that it can spread far beyond the original target. We locate over 2.9 million tweets in one week that contain instances of gendered insults (e.g., “bitch,” “cunt,” “slut,” or “whore”)—averaging 419,000 sexist slurs per day. The vast majority of these tweets are negative in sentiment. We analyze the social networks of the conversations that ensue in several cases and demonstrate how the use of “replies,” “retweets,” and “likes” can further victimize a target. Additionally, we develop a sentiment classifier that we use in a regression analysis to compare the negativity of sexist messages. We find that words in a message that reinforce feminine stereotypes inflate the negative sentiment of tweets to a significant and sizeable degree. These terms include those insulting someone’s appearance (e.g., “ugly”), intellect (e.g., “stupid”), sexual experience (e.g., “promiscuous”), mental stability (e.g., “crazy”), and age (e.g., “old”). Messages enforcing beauty norms tend to be particularly negative. In sum, hostile, sexist tweets are strategic in nature: they aim to promote traditional cultural beliefs about femininity, such as beauty ideals, and they shame victims by accusing them of falling short of these standards.
     Harassment on social media constitutes an everyday, routine occurrence, with researchers finding 9,764,583 messages referencing bullying on Twitter over the span of two years (Bellmore et al. 2015). In other words, Twitter users post over 13,000 bullying-related messages on a daily basis. Forms of online aggression also carry serious, negative consequences. Repeated research documents that bullying victims suffer a host of deleterious outcomes, such as low self-esteem (Hinduja and Patchin 2010), emotional and psychological distress (Ybarra et al. 2006), and negative emotions (Faris and Felmlee 2014; Juvonen and Gross 2008). Compared to those who have not been attacked, victims also tend to report more incidents of suicidal ideation and attempted suicide (Hinduja and Patchin 2010). Several studies document that the targets of cyberbullying are disproportionately women (Backe et al. 2018; Felmlee and Faris 2016; Hinduja and Patchin 2010; Pew Research Center 2017), although there are exceptions depending on definitions and venues. Yet we know little about the content or pattern of cyber aggression directed toward women in online forums. The purpose of the present research, therefore, is to examine in detail the practice of aggressive messaging that targets women and femininity on the social media venue of Twitter. Using both qualitative and quantitative analyses, we investigate the role of gender norm regulation in these patterns of cyber aggression.
  2. Online aggression represents a serious, and regularly occurring, social problem. In this piece the authors consider derogatory, harmful messages on the social media platform Twitter that target one of three groups of women: Asian, Black, and Latinx. The research focuses on messages that include one of the most common female slurs, “b!tch.” The findings of this chapter reveal that aggressive messages oriented toward women of color can be vicious and easily accessible (located in fewer than 30 seconds). Using an intersectional approach, the authors note the distinctive experiences of online harassment for women of color. The findings highlight the manner in which detrimental stereotypes are reinforced, including those of the “eroticized and obedient Asian woman,” the “angry Black woman,” and the “poor Latinx woman.” In some exceptions, women use the term “b!tch” in a positive and empowering manner, likely in an attempt to “reclaim” one of the words most commonly used to attack women. Applying a social network perspective, the authors illustrate the tendency of typically hostile tweets to develop into interactive network conversations, in which the original message spreads beyond the victim and, in the case of public individuals, quite widely. This research contributes to a deeper understanding of the processes that lead to online harassment, including the fortification of prevailing norms and social dominance. Finally, the authors find that messages that use the word “b!tch” to insult Asian, Black, and Latinx women are particularly damaging: they reinforce traditional stereotypes of women and ethnoracial minorities, and they possess the ability to extend to wider audiences.
  3. The authors use the timing of a change in Twitter’s rules regarding abusive content to test the effectiveness of organizational policies aimed at stemming online harassment. Institutionalist theories of social control suggest that such interventions can be efficacious if they are perceived as legitimate, whereas theories of psychological reactance suggest that users may instead ratchet up aggressive behavior in response to the sanctioning authority. In a sample of 3.6 million tweets spanning one month before and one month after Twitter’s policy change, the authors find evidence of a modest positive shift in the average sentiment of tweets with slurs targeting women and/or African Americans. The authors further illustrate this trend by tracking the network spread of specific tweets and individual users; retweeted messages are more negative than those not forwarded. These patterns suggest that organizational “anti-abuse” policies can play a role in curbing hateful speech on social media without inflaming further abuse.
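
The sentiment-and-stereotype comparison described in the first abstract can be illustrated with a minimal sketch. Everything below is invented for illustration: the toy lexicon, the example tweets, and the stereotype-term list are stand-ins, not the authors' actual classifier or data. The study itself used a trained sentiment classifier and a regression over millions of tweets; this sketch only shows the shape of the idea — score each message, then compare messages with and without stereotype-reinforcing terms.

```python
# Illustrative sketch only; lexicon, tweets, and term lists are fabricated.

NEGATIVE = {"ugly", "stupid", "crazy", "awful", "hate"}   # toy negative lexicon
POSITIVE = {"love", "great", "queen", "proud"}            # toy positive lexicon
STEREOTYPE_TERMS = {"ugly", "stupid", "crazy", "old"}     # appearance/intellect/stability/age

def sentiment(tokens):
    """Crude lexicon score in [-1, 1]: (positive - negative) / total tokens."""
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    return (pos - neg) / max(len(tokens), 1)

tweets = [
    "you are so ugly and stupid".split(),
    "crazy old witch i hate you".split(),
    "love this proud queen".split(),
    "have a great day".split(),
]

# Partition by presence of a stereotype-reinforcing term.
with_terms = [sentiment(t) for t in tweets if STEREOTYPE_TERMS & set(t)]
without = [sentiment(t) for t in tweets if not STEREOTYPE_TERMS & set(t)]

# The abstract's regression finding, in miniature: messages containing
# stereotype-reinforcing terms score more negative on average.
print(sum(with_terms) / len(with_terms) < sum(without) / len(without))  # True on this toy data
```

The real analysis regresses sentiment on term indicators rather than comparing group means, but the qualitative claim — stereotype terms inflate negativity — is the same.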
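
Similarly, the before-and-after comparison in the third abstract reduces, at its core, to partitioning sentiment-scored tweets at the policy-change date and comparing mean sentiment on each side. The cutoff date and the scores below are fabricated placeholders (the actual change date and the study's 3.6 million tweets are not reproduced here):

```python
# Hedged sketch of a pre/post policy comparison; all dates and scores invented.
from datetime import date

POLICY_CHANGE = date(2015, 12, 30)  # placeholder cutoff, not the study's actual date

scored = [  # (tweet date, sentiment score in [-1, 1])
    (date(2015, 12, 1), -0.6),
    (date(2015, 12, 15), -0.5),
    (date(2016, 1, 5), -0.4),
    (date(2016, 1, 20), -0.3),
]

before = [s for d, s in scored if d < POLICY_CHANGE]
after = [s for d, s in scored if d >= POLICY_CHANGE]

# A positive shift means average sentiment became less negative after the change.
shift = sum(after) / len(after) - sum(before) / len(before)
print(f"mean sentiment shift after policy change: {shift:+.2f}")
```

The published analysis controls for time trends and tracks retweet networks as well; this sketch captures only the basic difference-in-means logic.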